Machine Learning

Types of algorithms

  • Supervised
    • Regression
    • Classification
  • Unsupervised
    • Clustering
  • ...

Univariate linear regression

Problem formulation


In [9]:
%%latex
\begin{aligned}
&\text{Hypothesis:} \\
&\quad h_\theta(x) = \theta_0 + \theta_1 x \\
&\text{Parameters:} \\
&\quad \theta_0, \theta_1 \\
&\text{Cost function (error):} \\
&\quad J(\theta_0, \theta_1) = \frac{1}{2m} \sum_{i = 1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right)^{2} \\
&\text{Goal:} \\
&\quad \min_{\theta_0, \theta_1} J(\theta_0, \theta_1)
\end{aligned}


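To make the cost function concrete, here is a minimal NumPy sketch (the function name, variable names, and toy data are illustrative assumptions, not from the original):

In [ ]:
import numpy as np

def compute_cost(x, y, theta0, theta1):
    """J(theta_0, theta_1) with the 1/(2m) convention from above."""
    m = len(y)
    predictions = theta0 + theta1 * x        # h_theta(x^(i)) for every example
    return np.sum((predictions - y) ** 2) / (2 * m)

# Toy data lying exactly on y = 1 + 2x, so (theta0, theta1) = (1, 2) gives J = 0
x = np.array([1.0, 2.0, 3.0])
y = np.array([3.0, 5.0, 7.0])
print(compute_cost(x, y, 1.0, 2.0))  # 0.0
print(compute_cost(x, y, 0.0, 0.0))  # positive cost for poor parameters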

Algorithms for minimizing the cost function

Searching for local minima

  • Gradient descent

In [11]:
%%latex
\begin{aligned}
    \alpha \rightarrow \text{learning rate (step size)}
\end{aligned}



repeat until convergence {

$$ \theta_j := \theta_j - \alpha \frac{\partial }{\partial \theta_j} J(\theta_0, \theta_1) $$
(simultaneously update j = 0 and j = 1)

}
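
For univariate linear regression the partial derivatives work out to $\frac{\partial}{\partial \theta_0} J = \frac{1}{m} \sum_{i=1}^{m} ( h_\theta(x^{(i)}) - y^{(i)} )$ and $\frac{\partial}{\partial \theta_1} J = \frac{1}{m} \sum_{i=1}^{m} ( h_\theta(x^{(i)}) - y^{(i)} ) \, x^{(i)}$. A minimal NumPy sketch of the loop follows; the fixed iteration count stands in for a real convergence test, and all names and data are illustrative assumptions:

In [ ]:
import numpy as np

def gradient_descent(x, y, alpha=0.1, num_iters=2000):
    """Batch gradient descent for h_theta(x) = theta0 + theta1 * x."""
    m = len(y)
    theta0, theta1 = 0.0, 0.0
    for _ in range(num_iters):
        error = (theta0 + theta1 * x) - y      # h_theta(x^(i)) - y^(i)
        grad0 = np.sum(error) / m              # dJ/dtheta_0
        grad1 = np.sum(error * x) / m          # dJ/dtheta_1
        # Simultaneous update: both gradients use the same old parameter values
        theta0 -= alpha * grad0
        theta1 -= alpha * grad1
    return theta0, theta1

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([3.0, 5.0, 7.0, 9.0])             # generated by y = 1 + 2x
print(gradient_descent(x, y))                  # approaches (1.0, 2.0)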

